Imaging, scattering, and spectroscopy are fundamental to understanding and discovering new functional materials. Contemporary innovations in automation and experimental techniques have led to faster measurements at higher resolution, producing vast quantities of data for analysis. These innovations are particularly pronounced at user facilities and synchrotron light sources. Machine learning (ML) methods are routinely developed to process and interpret such large datasets in real time. However, there remain conceptual barriers to entry for the facility's general user community, which often lacks expertise in ML, as well as technical barriers to deploying ML models. Here we demonstrate a variety of prototypical ML models for on-the-fly analysis at multiple beamlines of the National Synchrotron Light Source II (NSLS-II). We describe these examples instructively, focusing on integrating the models into existing experimental workflows, so that readers can easily incorporate their own ML techniques into experiments at NSLS-II or at facilities with a common infrastructure. The framework presented here demonstrates how, with little effort, diverse ML models can operate in conjunction with feedback loops via integration into the existing Bluesky suite for experimental orchestration and data management.
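As a minimal illustration of this kind of integration, the sketch below subscribes a toy model to a Bluesky RunEngine so that every event document is scored as it is emitted. The `ThresholdModel` stand-in and the simulated `det`/`motor` hardware from `ophyd.sim` are assumptions for demonstration, not a beamline-specific recipe.

```python
from bluesky import RunEngine
from bluesky.plans import scan
from ophyd.sim import det, motor  # simulated detector and motor shipped with ophyd


class ThresholdModel:
    """Stand-in for any trained ML model (classifier, autoencoder, ...)."""

    def predict(self, reading):
        return "peak" if reading > 0.5 else "background"


model = ThresholdModel()
RE = RunEngine({})


def ml_callback(name, doc):
    # Bluesky emits (name, document) pairs; 'event' documents carry readings.
    if name == "event":
        reading = doc["data"]["det"]
        print(f"point {doc['seq_num']}: det={reading:.3f} -> {model.predict(reading)}")


RE.subscribe(ml_callback)          # feedback-loop hook: every document passes through
RE(scan([det], motor, -1, 1, 21))  # 21-point scan of the simulated Gaussian detector
```

In a real deployment the callback could publish predictions back into the experiment plan to close the feedback loop, rather than just printing them.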
Self-training has been shown to help address data scarcity in many domains, including vision, speech, and language. Specifically, self-training, or pseudo-labeling, labels unsupervised data and adds it to the training pool. In this work, we investigate and use pseudo-labeling for a recently proposed novel setup: joint transcription and translation of speech, which suffers from an absence of sufficient data resources. We show that under such data-deficient circumstances, the unlabeled data can differ significantly in domain from the supervised data, which degrades pseudo-label quality. We investigate two categories of remedies that require no additional supervision and target the domain mismatch: pseudo-label filtering and data augmentation. We show that such pseudo-label analysis and processing yields additional gains on top of the vanilla pseudo-labeling setup, for total improvements of up to 0.6% absolute WER and 2.2 BLEU points.
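A minimal sketch of the filtering remedy, assuming a hypothetical `model.transcribe` API that returns a hypothesis and an average token log-probability; real systems might instead gate on length-normalized beam scores or on agreement between the transcription and translation heads.

```python
import math


def filter_pseudo_labels(unlabeled_audio, model, threshold=0.9):
    """Keep only unlabeled examples whose pseudo-label confidence is high.

    `model.transcribe` is an illustrative stand-in, not the paper's exact
    interface; `threshold` is likewise an assumed hyperparameter.
    """
    kept = []
    for audio in unlabeled_audio:
        hypothesis, avg_logprob = model.transcribe(audio)
        if math.exp(avg_logprob) >= threshold:  # per-token confidence gate
            kept.append((audio, hypothesis))    # joins the training pool
    return kept
```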
Simulating rigid collisions among arbitrary shapes is notoriously difficult due to complex geometry and the strong non-linearity of the interactions. While graph neural network (GNN)-based models are effective at learning to simulate complex physical dynamics, such as fluids, cloth and articulated bodies, they have been less effective and efficient on rigid-body physics, except with very simple shapes. Existing methods that model collisions through the meshes' nodes are often inaccurate because they struggle when collisions occur on faces far from nodes. Alternative approaches that represent the geometry densely with many particles are prohibitively expensive for complex shapes. Here we introduce the Face Interaction Graph Network (FIGNet) which extends beyond GNN-based methods, and computes interactions between mesh faces, rather than nodes. Compared to learned node- and particle-based methods, FIGNet is around 4x more accurate in simulating complex shape interactions, while also 8x more computationally efficient on sparse, rigid meshes. Moreover, FIGNet can learn frictional dynamics directly from real-world data, and can be more accurate than analytical solvers given modest amounts of training data. FIGNet represents a key step forward in one of the few remaining physical domains which have seen little competition from learned simulators, and offers allied fields such as robotics, graphics and mechanical design a new tool for simulation and model-based planning.
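To make the face-level idea concrete, here is a hedged sketch that connects pairs of nearby triangle faces from two meshes using centroid distance. FIGNet's actual collision handling uses closest-point computations between triangles, so this is an approximation for illustration only; all names and the `radius` value are assumptions.

```python
import numpy as np


def face_interaction_edges(verts_a, faces_a, verts_b, faces_b, radius=0.05):
    """Connect triangle faces from two meshes whose centroids lie within
    `radius` of each other: edges between faces, not between nodes."""
    cent_a = verts_a[faces_a].mean(axis=1)  # (Fa, 3) triangle centroids
    cent_b = verts_b[faces_b].mean(axis=1)  # (Fb, 3)
    dists = np.linalg.norm(cent_a[:, None, :] - cent_b[None, :, :], axis=-1)
    ia, ib = np.nonzero(dists < radius)     # candidate colliding face pairs
    return np.stack([ia, ib], axis=0)       # (2, E) face-face edge index
```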
Continuous pseudo-labeling (PL) algorithms such as slimIPL have recently emerged as a powerful strategy for semi-supervised learning in speech recognition. In contrast with earlier strategies that alternated between training a model and generating pseudo-labels (PLs) with it, here PLs are generated in an end-to-end manner as training proceeds, improving training speed and the accuracy of the final model. PL shares a common theme with teacher-student models such as distillation, in that a teacher model generates targets that the student model being trained must mimic. Interestingly, however, PL strategies generally use hard labels, whereas distillation uses the distribution over labels as the target to mimic. Inspired by distillation, we expect that specifying the whole distribution (aka soft labels) over sequences as the target for unlabeled data, instead of a single best pseudo-labeled transcript (hard labels), should improve PL performance and convergence. Surprisingly and unexpectedly, we find that soft-label targets can lead to training divergence, with the model collapsing to a degenerate token distribution per frame. We hypothesize that this does not happen with hard labels because the training loss on hard labels imposes sequence-level consistency that keeps the model from collapsing to the degenerate solution. In this paper, we present several experiments that support this hypothesis, and we experiment with several regularization approaches that can ameliorate the degenerate collapse when using soft labels. These approaches can bring the accuracy of soft labels closer to that of hard labels, and while they are not yet able to outperform them, they serve as a useful framework for further improvements.
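The contrast between the two target types can be sketched as follows, with an illustrative frame-level KL objective for soft labels and a CTC objective on the pseudo-transcript for hard labels; these are not slimIPL's exact losses, and the temperature `T` is an assumed knob.

```python
import torch.nn.functional as F


def soft_label_loss(student_logits, teacher_logits, T=1.0):
    """Soft-label target: per-frame KL between teacher and student
    distributions. No sequence-level constraint is imposed here, which
    is the property hypothesized above to permit degenerate collapse."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T


def hard_label_loss(student_log_probs, pseudo_transcript, in_lens, out_lens):
    """Hard-label target: CTC on the teacher's single best transcript,
    which enforces sequence-level consistency."""
    return F.ctc_loss(student_log_probs, pseudo_transcript, in_lens, out_lens)
```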
Cancer is one of the leading causes of death worldwide. It is caused by a variety of genetic mutations, which makes every instance of the disease unique. Since chemotherapy can have extremely severe side effects, each patient requires a personalized treatment plan. Finding the dosages that maximize the beneficial effects of the drugs and minimize their adverse side effects is vital. Deep neural networks automate and improve drug selection, but they require a lot of data to be trained on; hence there is a need for machine-learning approaches that require less data. Hybrid quantum neural networks have been shown to provide a potential advantage in problems where training data availability is limited. We propose a novel hybrid quantum neural network for drug response prediction based on a combination of convolutional, graph convolutional, and deep quantum neural layers, using 8 qubits and 363 layers. We test our model on the reduced Genomics of Drug Sensitivity in Cancer dataset and show that the hybrid quantum model outperforms its classical analog by 15% in predicting IC50 drug effectiveness values. The proposed hybrid quantum machine learning model is a step towards deep, data-efficient quantum algorithms with thousands of quantum gates for solving problems in personalized medicine, where data collection is a challenge.
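As a hedged sketch of what one quantum block in such a hybrid model might look like (the paper's 363-layer circuit is not reproduced here, and the ansatz below is an assumption), the following PennyLane snippet builds an 8-qubit variational layer with angle encoding, trainable rotations, and an entangling CNOT ladder.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)


@qml.qnode(dev)
def quantum_layer(inputs, weights):
    """Angle-encode classical features, entangle, read out Pauli-Z values."""
    for i in range(n_qubits):
        qml.RY(inputs[i], wires=i)              # data encoding
    for layer in range(weights.shape[0]):
        for i in range(n_qubits):
            qml.RY(weights[layer, i], wires=i)  # trainable rotations
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])          # entangling ladder
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]


weights = np.random.uniform(0, np.pi, size=(3, n_qubits), requires_grad=True)
features = np.random.uniform(0, np.pi, size=n_qubits)
print(quantum_layer(features, weights))  # 8 expectation values, fed to classical layers
```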
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
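A short usage example with the Hugging Face transformers library; the small public `bigscience/bloom-560m` checkpoint is used here only to keep the demonstration lightweight (the full 176B model is published as `bigscience/bloom`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("A synchrotron light source is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```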
This paper proposes a non-data-driven deep neural network for spectral image recovery problems such as denoising, single hyperspectral image super-resolution, and compressive spectral imaging reconstruction. Unlike previous methods, the proposed approach, dubbed Mixture-Net, implicitly learns the prior information through the network. Mixture-Net consists of a deep generative model whose layers are inspired by linear and non-linear low-rank mixture models, where the recovered image is composed of a weighted sum of the linear and non-linear decompositions. Mixture-Net also provides a low-rank decomposition that can be interpreted as the spectral image abundances and endmembers, which is helpful for remote sensing tasks without running additional routines. The experiments show that Mixture-Net is effective, outperforming state-of-the-art methods in recovery quality, with the added advantage of architecture interpretability.
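The weighted linear/non-linear decomposition can be sketched as follows; all inputs are illustrative stand-ins for quantities that Mixture-Net's generative layers would produce, and the weight `w` is an assumed scalar rather than the paper's learned weighting.

```python
import numpy as np


def mixture_reconstruction(A, E, nonlinear, w=0.5):
    """Recover a spectral image as a weighted sum of a linear low-rank
    mixture (abundances A of shape (H, W, R) times endmembers E of shape
    (R, L)) and a non-linear term of shape (H, W, L)."""
    H, W, R = A.shape
    L = E.shape[1]
    linear = A.reshape(-1, R) @ E  # classic linear mixing model, (H*W, L)
    return (w * linear + (1 - w) * nonlinear.reshape(-1, L)).reshape(H, W, L)
```

The interpretability claim follows from this structure: `A` and `E` are directly readable as abundance maps and endmember spectra.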
Vision-based navigation requires processing complex information to make task-oriented decisions. Applications include autonomous mobile robots, self-driving cars, and assistive vision for humans. A key element of the process is the extraction and selection of relevant features in pixel space on which to base action choices, a task for which machine learning techniques are well suited. However, deep reinforcement learning agents trained in simulation often behave unsatisfactorily when deployed in the real world, owing to perceptual differences known as the reality gap. One approach that has not yet been explored to bridge this gap is self-attention. In this paper, we (1) conduct a systematic exploration of self-attention-based navigation in 3D environments and examine the behaviors that emerge from different hyperparameter sets, including their generalization capabilities; (2) present strategies for improving the agents' generalization abilities and navigation behavior; and (3) show how models trained in simulation can process real-world images in real time. To our knowledge, this is the first demonstration of a self-attention-based agent that successfully navigates a 3D action space using fewer than 4,000 parameters.
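For a sense of scale, the sketch below shows what an agent in this parameter regime might look like: single-head self-attention over patch embeddings feeding a small action head. The sizes are assumptions chosen only to keep the total parameter count below 4,000, not the paper's architecture.

```python
import torch.nn as nn


class TinyAttentionAgent(nn.Module):
    """Illustrative sub-4,000-parameter self-attention policy."""

    def __init__(self, patch_dim=48, d=16, n_actions=3):
        super().__init__()
        self.proj = nn.Linear(patch_dim, d)  # embed flattened 4x4 RGB patches
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.head = nn.Linear(d, n_actions)  # action logits

    def forward(self, patches):              # patches: (B, N, patch_dim)
        x = self.proj(patches)
        x, _ = self.attn(x, x, x)            # self-attention over patches
        return self.head(x.mean(dim=1))      # pool, then choose an action


agent = TinyAttentionAgent()
print(sum(p.numel() for p in agent.parameters()))  # well under 4,000
```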
Deep reinforcement learning has shown that superhuman performance can be achieved on complex reinforcement learning (RL) tasks working from raw pixels alone. However, it cannot reuse knowledge from previously learned tasks to solve new, unseen ones. Generalizing and reusing knowledge are fundamental requirements for creating truly intelligent agents. This work proposes a general method for one-to-one transfer learning based on a generative adversarial network model tailored to RL tasks.
Context: The number of television series on offer today is very high, and owing to their sheer quantity, many series are canceled for lack of originality. Problem: A decision support system that explains why some shows are a great success, or are not, would support the choice of whether to renew or start a show. Solution: We study the case of the series Arrow, broadcast by the CW Network, and use descriptive and predictive modeling techniques to predict its IMDB ratings. We assume that an episode's themes influence users' evaluations, so the dataset consists only of the episode's director, the number of reviews the episode received, the percentage of each theme extracted from the episode's plot by Latent Dirichlet Allocation (LDA), the number of viewers from Wikipedia, and the rating from IMDB. The LDA model is a generative probabilistic model for collections of documents composed of words. Method: In this prescriptive research, the case study method was used, and the results were analyzed with a quantitative approach. Summary of results: Given each episode's features, the model that best predicted the ratings was a KNN model, which achieved a mean squared error similar to the other models but a better standard deviation in the test phase. IMDB ratings could be predicted with an acceptable root mean squared error of 0.55.
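An illustrative scikit-learn pipeline that mirrors the described features (the director column is omitted for brevity; it could be one-hot encoded). The episode data below are invented stand-ins, not values from the study.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsRegressor

# Toy stand-ins for the real data (episode plots, viewers, reviews, ratings).
plots = ["oliver fights the dark archer", "felicity hacks the city grid",
         "team arrow faces a new vigilante", "laurel trains as black canary"]
viewers = np.array([2.9, 2.5, 2.7, 2.4])    # millions, from Wikipedia
n_reviews = np.array([120, 80, 95, 70])
ratings = np.array([8.1, 7.4, 7.8, 7.2])    # IMDB targets

counts = CountVectorizer().fit_transform(plots)
topics = LatentDirichletAllocation(n_components=2, random_state=0) \
    .fit_transform(counts)                  # per-episode theme percentages

X = np.column_stack([topics, viewers, n_reviews])
knn = KNeighborsRegressor(n_neighbors=2).fit(X, ratings)
print(knn.predict(X[:1]))                   # predicted IMDB rating
```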